
fix: default OpenAI provider for /openai/v1/responses WS path (#2995) #3006

Closed

thiscantbeserious wants to merge 1 commit into maximhq:main from thiscantbeserious:fix/2995-ws-bare-model-integration

Conversation

@thiscantbeserious

Summary

The WebSocket responses handler rejected bare model strings (e.g. gpt-4o) on the /openai/v1/responses integration path with a 400 "failed to parse model string" error. This PR fixes the two ParseModelString call sites so the integration path infers openai as the default provider from the request URL, matching the behavior of the HTTP path.

Changes

  • Added an inferDefaultProviderFromPath(path string) schemas.ModelProvider helper in wsresponses.go. It returns schemas.OpenAI for paths starting with /openai/ and an empty provider for all other paths (including the unified /v1/responses, which is multi-provider by design and intentionally requires an explicit provider/model prefix). See the sketch after this list.
  • Threaded a defaultProvider schemas.ModelProvider parameter from handleUpgrade (where ctx.Path() is available) through eventLoop and handleResponseCreate to convertEventToRequest. Both ParseModelString call sites (lines 177 and 422 on origin/main) now receive the correct default.
  • Design choice: the path-sniff approach was preferred over storing a field on the session or config because the fix stays entirely local to wsresponses.go, requires no new types, and reads cleanly at the call site. The session object is not the right place for request-scoped routing data, and the parameter threading is shallow (three levels).
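
A minimal sketch of the helper, assuming its body follows the description above (the actual implementation in wsresponses.go may differ in detail):

func inferDefaultProviderFromPath(path string) schemas.ModelProvider {
    // Integration-scoped path: /openai/... implies the OpenAI provider.
    if strings.HasPrefix(path, "/openai/") {
        return schemas.OpenAI
    }
    // Any other path, including the unified /v1/responses, gets no default,
    // so bare model strings there still need an explicit provider prefix.
    // (Assumes ModelProvider's zero value is the empty string.)
    return ""
}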

Type of change

  • Bug fix

Affected areas

  • Transports (HTTP)

How to test

Run the handler unit tests, which cover the new helper and the parse behavior on both paths:

cd transports
go build ./bifrost-http/handlers/...
go vet ./bifrost-http/handlers/...
go test -count=1 -cover ./bifrost-http/handlers/...

Expected output:

ok  github.com/maximhq/bifrost/transports/bifrost-http/handlers  ~1.2s  coverage: 8.2% of statements
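
For reference, a table-driven test for the helper could look roughly like this (a hypothetical sketch; the tests actually added in this PR may be organized differently):

func TestInferDefaultProviderFromPath(t *testing.T) {
    cases := []struct {
        path string
        want schemas.ModelProvider
    }{
        {"/openai/v1/responses", schemas.OpenAI}, // integration path defaults to OpenAI
        {"/v1/responses", ""},                    // unified path stays strict
    }
    for _, c := range cases {
        if got := inferDefaultProviderFromPath(c.path); got != c.want {
            t.Errorf("inferDefaultProviderFromPath(%q) = %q, want %q", c.path, got, c.want)
        }
    }
}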

Live reproduction (requires a running Bifrost instance with an OpenAI provider configured and allow_direct_keys: true):

# Before fix: returns 400 "failed to parse model string"
printf '{"type":"response.create","model":"gpt-4o","input":[{"type":"message","role":"user","content":[{"type":"input_text","text":"hi"}]}]}\n' | \
  websocat -n1 "ws://localhost:8080/openai/v1/responses" \
  -H='Authorization: Bearer sk-your-key'

# After fix: reaches the upstream (401 on a fake key, 200 on a real key)

Screenshots/Recordings

N/A

Breaking changes

  • No

Related issues

Closes #2995. Extracted from #2775.

Security considerations

The path-sniff only changes which default provider value is passed to ParseModelString. It does not affect auth, key selection, or any other security-relevant logic. The unified /v1/responses path is unchanged.

Checklist

  • I read docs/contributing/README.md and followed the guidelines
  • I added/updated tests where appropriate
  • I updated documentation where needed
  • I verified builds succeed (Go and UI)
  • I verified the CI pipeline passes locally if applicable

…q#2995)

The WebSocket responses handler called ParseModelString with an empty
default provider at both call sites (handleResponseCreate line 177,
convertEventToRequest line 422). For a bare model string like "gpt-4o"
this returned provider="" which the guard treated as a fatal parse
failure, producing a 400 error on /openai/v1/responses.

The fix threads a defaultProvider value inferred from the upgrade
request path through handleUpgrade -> eventLoop -> handleResponseCreate
-> convertEventToRequest. A new helper inferDefaultProviderFromPath
returns schemas.OpenAI for paths starting with /openai/ and an empty
provider for all other paths (including the unified /v1/responses, which
is multi-provider by design and intentionally requires an explicit
provider/model prefix).

Both ParseModelString call sites now receive the correct default, so
bare model strings are accepted on the integration-scoped path without
affecting the unified path behavior.
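
To illustrate the shape of the change at each call site (the variable names and
the exact ParseModelString signature shown here are assumptions, not copied
from the repo):

// Before: an empty default, so a bare "gpt-4o" produced an empty provider
// and the guard rejected the request with a 400.
//   provider, model := ParseModelString(event.Model, "")

// After: the default inferred from the upgrade path is threaded down and
// used whenever the model string carries no provider prefix.
provider, model := ParseModelString(event.Model, defaultProvider)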

Tests added for inferDefaultProviderFromPath (integration and unified
paths) and convertEventToRequest (bare model accepted on integration
path, rejected on unified path, prefixed models work on both paths,
Anthropic-prefixed model works on unified path).

Closes maximhq#2995
@coderabbitai
Contributor

coderabbitai Bot commented Apr 24, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.


thiscantbeserious added a commit to thiscantbeserious/bifrost that referenced this pull request Apr 24, 2026
…ximhq#3006)

This fix for maximhq#2995 was originally included in this feature branch as a
side effect. It has been extracted into a focused PR maximhq#3006 against main
that implements the narrower path-sniff approach requested by the
maintainer on maximhq#2959 (only the integration path defaults to OpenAI, the
unified path stays strict).

Restoring the empty default at both call sites here so this branch no
longer overlaps with maximhq#3006.

Development

Successfully merging this pull request may close these issues.

[Bug]: /openai/v1/responses WebSocket path rejects bare model strings with a 400 error